
Adversarial Fisher Vectors for Unsupervised Representation Learning

Neural Information Processing Systems

We examine Generative Adversarial Networks (GANs) through the lens of deep Energy Based Models (EBMs), with the goal of exploiting the density model that follows from this formulation. In contrast to a traditional view where the discriminator learns a constant function when reaching convergence, here we show that it can provide useful information for downstream tasks, e.g., feature extraction for classification. To be concrete, in the EBM formulation, the discriminator learns an unnormalized density function (i.e., the negative energy term) that characterizes the data manifold. We propose to evaluate both the generator and the discriminator by deriving corresponding Fisher Score and Fisher Information from the EBM. We show that by assuming that the generated examples form an estimate of the learned density, both the Fisher Information and the normalized Fisher Vectors are easy to compute. We also show that we are able to derive a distance metric between examples and between sets of examples. We conduct experiments showing that the GAN-induced Fisher Vectors demonstrate competitive performance as unsupervised feature extractors for classification and perceptual similarity tasks. Code is available at \url{https://github.com/apple/ml-afv}.
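The quantities named in the abstract can be sketched as follows (notation is ours, not fixed by the abstract; the generator samples \(G(z_m)\) stand in for draws from the learned density):

```latex
% EBM density induced by the discriminator's output D_\theta (the negative energy):
%   p_\theta(x) = \exp(D_\theta(x)) / Z(\theta)
% Fisher Score: the log-partition gradient is replaced by a Monte Carlo
% average over generator samples G(z_m), m = 1..M:
\[
s_\theta(x) = \nabla_\theta \log p_\theta(x)
            = \nabla_\theta D_\theta(x) - \mathbb{E}_{p_\theta}\!\left[\nabla_\theta D_\theta(x)\right]
    \approx \nabla_\theta D_\theta(x) - \frac{1}{M}\sum_{m=1}^{M} \nabla_\theta D_\theta\!\big(G(z_m)\big)
\]
% Diagonal Fisher Information estimate and the normalized Fisher Vector:
\[
\hat{I} \approx \frac{1}{M}\sum_{m=1}^{M} s_\theta\!\big(G(z_m)\big)^{\odot 2}
\qquad
v(x) = \hat{I}^{-1/2}\, s_\theta(x)
\]
```

Under this view, both expectations reduce to averages over generated examples, which is why the Fisher Information and the normalized Fisher Vectors are cheap to compute once the discriminator gradients are available.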


Reviews: Adversarial Fisher Vectors for Unsupervised Representation Learning

Neural Information Processing Systems

This paper continues a thread in the literature linking GANs and deep energy-based models, the basic idea being that the discriminator can represent an energy function for the distribution and the generator a sampler for the same; this allows, among other things, a sampling approximation of the negative-phase term (the gradient of the log partition function) using samples from the generator. Taking this view, the manuscript under consideration proposes to leverage the gradient of the model's log-density with respect to the discriminator's parameters to produce both Fisher vectors and a (diagonal approximation to the) Fisher information matrix for the model distribution. This allows for a powerful form of unsupervised representation learning and an induced distance metric, both between points and between sets of points (by applying the distance measure to the means of the sets). Overall, I feel this is a solid piece of generative-model research. It proposes a fresh take on well-worn territory, makes several principled contributions to training methodology, and empirically demonstrates the method's usefulness, in particular a quite impressive classification result from unsupervised representation learning.
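The induced distance the review mentions can be illustrated with a small sketch. This assumes normalized Fisher vectors have already been extracted for each example; the function names and the epsilon stabilizer are illustrative, not from the paper.

```python
import numpy as np

def normalize_scores(scores, fisher_diag, eps=1e-8):
    """Scale raw Fisher scores by the inverse square root of a diagonal
    Fisher Information estimate (hypothetical helper; `eps` guards
    against division by zero)."""
    return scores / np.sqrt(fisher_diag + eps)

def example_distance(u, v):
    """Euclidean distance between two normalized Fisher vectors, i.e. a
    Mahalanobis-style distance in the original score space."""
    return float(np.linalg.norm(u - v))

def set_distance(U, V):
    """Distance between two sets of examples, computed by applying the
    point distance to the mean Fisher vector of each set."""
    return example_distance(U.mean(axis=0), V.mean(axis=0))
```

The set-level variant is exactly the construction the review describes: reduce each set to its mean normalized Fisher vector, then reuse the pointwise metric.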



Adversarial Fisher Vectors for Unsupervised Representation Learning

Zhai, Shuangfei, Talbott, Walter, Guestrin, Carlos, Susskind, Joshua

Neural Information Processing Systems
